

Understanding and Exploring the Network with Stochastic Architectures

Neural Information Processing Systems

The predictions provided by different architectures can be further assembled or used to calculate uncertainty estimates, making the prediction model more accurate, robust, and calibrated.


Understanding and Exploring the Network with Stochastic Architectures

Neural Information Processing Systems

There is an emerging trend to train a network with stochastic architectures so that various architectures can be plugged and played during inference. However, existing investigations are highly entangled with neural architecture search (NAS), limiting widespread use across scenarios. In this work, we decouple the training of a network with stochastic architectures (NSA) from NAS and provide the first systematic investigation of it as a stand-alone problem. We first uncover the characteristics of NSA in various aspects, ranging from training stability and convergence to predictive behaviour and generalization capacity on unseen architectures. We identify several issues with the vanilla NSA, such as training/test disparity and function mode collapse, and propose solutions to these issues with theoretical and empirical insights. We believe these results can also serve as good heuristics for NAS. Given these understandings, we further apply NSA with our improvements to diverse scenarios to fully exploit its promise of inference-time architecture stochasticity, including model ensembling, uncertainty estimation, and semi-supervised learning. Remarkable performance (e.g., a 2.75% error rate and 0.0032 expected calibration error on CIFAR-10) validates the effectiveness of such a model, providing new perspectives on exploring the potential of networks with stochastic architectures, beyond NAS.
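The abstract's ensembling and uncertainty-estimation use cases amount to averaging the predictive distributions produced by several sampled architectures and measuring the spread of the result. A minimal sketch of that idea, assuming a hypothetical interface where each sampled architecture yields a softmax output (this is illustrative, not the authors' actual NSA code):

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average the softmax outputs produced by networks instantiated
    with different sampled architectures (hypothetical interface)."""
    return np.mean(np.stack(prob_list, axis=0), axis=0)

def predictive_entropy(probs, eps=1e-12):
    """Per-example entropy of the averaged predictive distribution,
    a common uncertainty estimate for such ensembles."""
    return -np.sum(probs * np.log(probs + eps), axis=-1)

# Toy stand-in: 3 sampled architectures, 2 examples, 4 classes.
rng = np.random.default_rng(0)
preds = [rng.dirichlet(np.ones(4), size=2) for _ in range(3)]

avg = ensemble_predict(preds)       # shape (2, 4), rows sum to 1
unc = predictive_entropy(avg)       # shape (2,), higher = less certain
print(avg.shape, unc.shape)         # (2, 4) (2,)
```

The averaged distribution can then feed a calibration metric such as expected calibration error; the entropy per example gives a simple ranking of which inputs the ensemble is least sure about.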


Supplementary Material for: Understanding and Exploring the Network with Stochastic Architectures

Neural Information Processing Systems

In this section, we plot the 5 randomly sampled architectures used in NSA-id in Sec. 5. As shown in Figure 5, the 5 architectures are distinct from each other. We also provide further results on the training and test behaviour of the vanilla NSA and NSA-i. Figure 5: Five randomly sampled architectures used in NSA-id in Sec. 5. The training architecture space consists of 50000 samples.



Review for NeurIPS paper: Understanding and Exploring the Network with Stochastic Architectures

Neural Information Processing Systems

The authors propose an approach to addressing mode collapse and train/test disparity in deep neural networks, and the work includes a good empirical evaluation. The reviewers make a number of suggestions for how the text could be improved, which I encourage the authors to take on board.

